An algorithm is a set of rules to be followed to carry out a computation: a sequence of steps designed to solve a specific problem. Through algorithms a computer can be instructed, using logical operators such as “or” and “not”.
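Purely by way of illustration (the rule, names and thresholds below are invented, not drawn from any system discussed here), such a sequence of steps might be written as follows:

```python
def is_eligible(age: int, has_consent: bool, has_guardian_consent: bool) -> bool:
    """Hypothetical eligibility check: a fixed sequence of steps combining
    the inputs with the logical operators 'and', 'or' and 'not'."""
    # Step 1: adults qualify only if they have given consent themselves.
    adult_ok = age >= 18 and has_consent
    # Step 2: minors qualify only through a guardian's consent.
    minor_ok = (not age >= 18) and has_guardian_consent
    # Step 3: the final answer is the logical 'or' of the two branches.
    return adult_ok or minor_ok

print(is_eligible(age=25, has_consent=True, has_guardian_consent=False))  # True
```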
In Artificial Intelligence, by correlating millions of data points it is possible to imitate the human brain, hence the term neural network, and to teach the machine how to carry out a task.
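A minimal sketch of this idea, assuming scikit-learn and purely synthetic data (no real data set or system cited in this article is involved):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))            # 1,000 examples with 2 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the "activity" to be learned: a simple rule

# A small neural network infers the rule only from correlations in the examples.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.score(X, y))                  # accuracy on the training examples
```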
The “abilities” of human and machine differ: where the former “talks – listens – reads – chats – sees”, the latter “understands – reasons (it does not simply perform calculations but evaluates and compares hypotheses) – learns – interacts” (a matter for future debate).
The spread of research in this field opens up a series of normative and ethical questions, as well as practical ones, for which solutions and rules must be sought. It is no coincidence that several bodies responsible for the protection of privacy in various European countries have raised the question of how to regulate the phenomenon of Artificial Intelligence and make it, as far as possible, compliant with the European Privacy Regulation.
A) The Norwegian Data Protection Authority has identified three main points of tension between AI and GDPR compliance.
1. FAIRNESS. Recital 129 of the GDPR provides that «the powers of the supervisory authorities should be exercised in accordance with appropriate procedural guarantees provided for in Union and Member State law, impartially and fairly and within a reasonable time». Furthermore, Article 12 GDPR sets out the principle of transparency, referring to the information to be given to data subjects on the identity of the controller and the purposes of the processing, in order to ensure fair and transparent processing (rights to obtain confirmation of, and communication about, the processing of personal data concerning them). More generally, the principles underlying the new Privacy Regulation, namely “lawfulness, fairness and transparency”, “purpose limitation”, “data minimization”, “accuracy”, “storage limitation”, “integrity and confidentiality” and “accountability”, are all set out in Article 5 of the GDPR.
ProPublica, a non-profit investigative journalism group, reported that an artificial intelligence program applied to bail decisions exhibited an error of “judgment”, labelling black defendants “at risk of recidivism” twice as often as white defendants.
Another American system, C.O.M.P.A.S. (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used by judges to estimate the probability of recidivism within two years of an offence, has been the subject of a series of appeals before American criminal courts on grounds of discrimination, as the system favoured white defendants over African-American defendants.
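The kind of disparity reported in these cases can be illustrated, purely as a sketch with invented numbers, by comparing false positive rates, i.e. how often people who did not reoffend are nonetheless labelled “high risk”, across two groups:

```python
def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did NOT reoffend but were labelled 'high risk'."""
    fp = sum(p and not r for p, r in zip(predicted_high_risk, reoffended))
    negatives = sum(not r for r in reoffended)
    return fp / negatives if negatives else 0.0

# Invented records: (labelled "high risk" by the model, actually reoffended)
group_a = [(True, False), (True, False), (False, False), (True, True)]
group_b = [(False, False), (True, False), (False, False), (False, True)]

for name, records in [("group A", group_a), ("group B", group_b)]:
    labels, outcomes = zip(*records)
    print(name, round(false_positive_rate(labels, outcomes), 2))
# group A: 0.67 vs group B: 0.33, despite identical overall accuracy (0.5 in both)
```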
Neural networks, as we know, lack two distinctly human faculties: understanding and intuition.
2. DATA MINIMIZATION. The second tension between the progress of artificial intelligence and the GDPR lies in another principle set out in the already cited Article 5: data minimization. Since algorithmic “outputs” cannot be foreseen, asks the Norwegian Data Protection Authority, how can the purpose of the processing be circumscribed ab origine? Which data are to be considered necessary and which are not? The Norwegian Data Protection Authority has left the whole responsibility with the end user, who could protect himself through appropriate insurance cover.
3. TRANSPARENCY. The third problem concerns the clear and simple “communication” of the decision-making processes that underlie the functioning of a neural network; yet, more often than not, retracing that process can be extremely complicated, or even impossible!
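A purely illustrative sketch of why this is difficult, assuming scikit-learn and synthetic data (not any system examined by the authorities): even a very small network condenses its “reasoning” into thousands of numeric weights, none of which corresponds to a human-readable rule.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))     # synthetic data with 10 features
y = (X[:, 0] > 0).astype(int)      # the decision the network must reproduce

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                      random_state=0).fit(X, y)

# Count the learned parameters: the "explanation" of any single decision
# is spread across all of them, not stored as an explicit rule.
n_weights = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(n_weights)
```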
B) The AEPD and the document “Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una Introducción”
On 13 February last, the Spanish Privacy Authority also addressed the problem of Artificial Intelligence with respect to the European Privacy Regulation, focusing on issues central to the GDPR such as transparency and clarity of the information given to data subjects, data minimization, automated decisions and the principle of proportionality. In fact, the points on the AEPD's agenda are even more numerous and deserve to be assessed one by one.
What is immediately clear is the intention to regulate the AI phenomenon only with regard to the civilian sector, leaving the military sector aside, since the role of Artificial Intelligence can bring benefits to society and to the economy of citizens.
The points that require more regulation are the following:
- the scope of application,
- the information to be provided to users when they interact with an AI system,
- BIAS, i.e. the “deviations” of machine learning which, more importantly, can create problems in practice.
Although a total ban on facial recognition in public places had initially been suggested, the AEPD here opens a debate, allowing it where the data subject has given consent and suggesting that alternative methods be found where consent is refused.
As the phenomenon is on the rise, the Spanish Authority suggests that efforts should focus on the responsibilities, legitimacy and purpose of data collection and the quality of the data themselves.
The efforts of researchers and legislators should all be directed towards regulation that is as detailed as possible and inspired by continuous collaboration and cooperation.